Quality Management¶
At TrainingData.io, we believe the quality of a machine learning model depends entirely on the quality of the labeling in its training data. We help data scientists control the quality of data labeling in the following ways:
Annotator's Performance Management¶
- Measure, record, and analyze each annotator's performance on every individual task, asset, and label.
- Compare the performance of multiple annotators on the same task.
- Distribute labeling work among multiple annotators and measure the consensus among their results (see the sketch after this list).
- Seed annotation tasks with a golden data set and report each annotator's performance against it.
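The consensus and golden-set checks above can be summarized with a small sketch. This is an illustrative example only, not the TrainingData.io implementation: it assumes binary segmentation masks stored as NumPy arrays and measures agreement as mean pairwise intersection-over-union (IoU). All function names are hypothetical.

```python
# Illustrative sketch: inter-annotator consensus and golden-set accuracy for
# binary segmentation masks. Names are hypothetical, not a TrainingData.io API.
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two boolean masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / union if union else 1.0

def pairwise_consensus(masks: list[np.ndarray]) -> float:
    """Average IoU over all pairs of annotator masks for the same asset."""
    scores = [mask_iou(masks[i], masks[j])
              for i in range(len(masks))
              for j in range(i + 1, len(masks))]
    return float(np.mean(scores)) if scores else 1.0

def golden_set_score(annotator_mask: np.ndarray, golden_mask: np.ndarray) -> float:
    """Score one annotator's mask against the curated golden label."""
    return mask_iou(annotator_mask, golden_mask)
```

A low pairwise consensus or a low golden-set score for a particular annotator can then be surfaced in performance reports as a signal for review or retraining.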
Pixel Accurate tools¶
- We use advanced image processing to build pixel-accurate tools such as segmentation, freehand, and region-growth tools (a simplified growth-tool sketch follows this list).
- Build a high-quality user experience that allows annotators to produce high-quality work at a fast pace.
- Enable smooth collaboration between annotators and data scientists.
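To illustrate how a pixel-accurate growth tool can work, the sketch below grows a selection outward from a seed pixel, adding 4-connected neighbors whose intensity stays within a tolerance of the seed value. It is a minimal example for single-channel grayscale images, assumed here for simplicity, and not the production implementation.

```python
# Minimal region-growth sketch: flood-fill by intensity tolerance from a seed.
# Simplified illustration for grayscale images, not the actual tool.
from collections import deque
import numpy as np

def region_grow(image: np.ndarray, seed: tuple[int, int], tolerance: float = 10.0) -> np.ndarray:
    """Return a boolean mask of pixels connected to `seed` within `tolerance`."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_value = float(image[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(image[ny, nx]) - seed_value) <= tolerance:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```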
State of the art User Experience (UX)¶
- TD.io believes the quality of work performed by annotators depends on the quality of the user experience in the annotation tools.
- We build an intuitive user experience for annotators.
Labeling Instruction Builder¶
- Empower data scientists to take control of the annotator's user experience.
- Labeling classes and the ontology need to be defined in detail (an illustrative ontology definition follows this list).
- Data scientists need a precise view of how the labeling interface appears to the annotator.
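A labeling ontology can be expressed as a structured configuration that drives the annotator's interface. The sketch below is a hypothetical example of how classes, tools, and attributes might be defined; the structure and field names are illustrative only and do not reflect the TrainingData.io schema.

```python
# Hypothetical labeling-ontology definition. Field names are illustrative only,
# not the TrainingData.io schema.
ontology = {
    "classes": [
        {
            "name": "tumor",
            "tool": "segmentation",   # pixel-accurate mask
            "color": "#FF0000",
            "attributes": [
                {"name": "grade", "type": "radio", "options": ["low", "high"]},
            ],
        },
        {
            "name": "organ",
            "tool": "freehand",
            "color": "#00FF00",
            "attributes": [],
        },
    ],
    "instructions": "Outline each region precisely at the pixel level.",
}
```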